A total of 1,914 query results were found (search time: 490 ms).
11.
In this paper a new estimation approach combining Recursive Least Squares (RLS) and Bacterial Foraging Optimization (BFO) is developed for the accurate estimation of harmonics in distorted power system signals. The proposed RLS-BFO hybrid technique is employed to estimate the fundamental as well as the harmonic components present in power system voltage/current waveforms. The basic foraging strategy is made adaptive by using RLS, which sequentially updates the unknown parameters of the signal. Simulation and experimental studies are included to demonstrate the improved performance of this new estimation algorithm.
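The RLS half of the hybrid can be illustrated as a forgetting-factor recursion over an assumed sin/cos harmonic model; this is a minimal sketch only, and the BFO coupling that makes the foraging strategy adaptive is omitted here.

```python
import numpy as np

def rls_harmonics(samples, t, omega, n_harmonics, lam=0.99, delta=100.0):
    """Forgetting-factor RLS estimate of harmonic amplitudes.

    Assumed model: y(t) = sum_k [a_k sin(k*omega*t) + b_k cos(k*omega*t)],
    which is linear in theta = [a_1, b_1, ..., a_n, b_n].
    """
    n = 2 * n_harmonics
    theta = np.zeros(n)
    P = delta * np.eye(n)                       # large initial covariance
    for y, ti in zip(samples, t):
        # regressor: sin/cos basis evaluated at time ti
        phi = np.array([f(k * omega * ti)
                        for k in range(1, n_harmonics + 1)
                        for f in (np.sin, np.cos)])
        gain = P @ phi / (lam + phi @ P @ phi)  # Kalman-style gain
        theta = theta + gain * (y - phi @ theta)
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta
```

On a clean synthetic waveform the recursion recovers the injected amplitudes; in a distorted, noisy signal the forgetting factor `lam` trades tracking speed against variance.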
12.
Nowadays, multiagent systems have become a widely used technology in everyday life, and many authors view communication or interaction between agents as a joint activity regulated by means of dialogue games. Dialogue games are sets of communication rules that agents can combine in their complex interactions. In these games, uncertainty is an important problem that each agent faces when making decisions, especially in the absence of sufficient information. This paper focuses on uncertainty in a particular type of dialogue game, namely argumentation-based negotiation. Several proposals on this type of dialogue game exist in the literature, and most are concerned with proposing protocols that show how agents can communicate with each other and how arguments and offers can be generated, evaluated and exchanged. Nevertheless, none of them directly targets the agents' uncertainty about the exchanged arguments, or how this uncertainty could be measured at each dialogue step to help those agents make better decisions. The aim of this paper is to tackle this problem by defining a new set of uncertainty measures in negotiation dialogue games from an external agent's point of view. In particular, we introduce two types of uncertainty: Type I and Type II. Type I concerns the uncertainty index of playing the right move. For this, we use Shannon entropy to measure (i) the uncertainty index that an agent is selecting the right move at each dialogue step, and (ii) the uncertainty index of the participating agents over the whole dialogue. The latter is done in two different ways: first, by taking the average of the uncertainty indices of all moves, and second, by determining all possible dialogues and applying the general formula of Shannon entropy. Type II concerns the degree of uncertainty that a move will be accepted by the addressee. In this context, we introduce a new classification of arguments based on their certainty of being accepted by the addressee.
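The Type I measure can be illustrated with normalised Shannon entropy over the agent's distribution across legal moves, plus the averaging variant for a whole dialogue; the normalisation and averaging below are one plausible reading of the abstract, not the paper's exact formulas.

```python
import math

def move_uncertainty(probs):
    """Normalised Shannon entropy of the agent's distribution over the
    legal moves at one dialogue step: 0 = certain of the right move,
    1 = maximally uncertain."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    n = len(probs)
    return h / math.log2(n) if n > 1 else 0.0

def dialogue_uncertainty(step_distributions):
    """Whole-dialogue index, first variant: the average of the per-step
    uncertainty indices over all moves of the dialogue."""
    return (sum(move_uncertainty(p) for p in step_distributions)
            / len(step_distributions))
```

For example, an agent certain of its move contributes 0, while a coin-flip between two moves contributes 1, so a dialogue mixing both steps scores 0.5.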
13.
The Job-Shop Scheduling Problem (JSSP) is well known for its complexity as an NP-hard disjunctive scheduling problem. The problem addressed in this paper is the JSSP with the objective of minimizing makespan while satisfying a number of hard constraints. An efficient GRASP × ELS approach is introduced for solving this problem. Its efficiency is evaluated on the 40 widely used Lawrence instances, which encompass medium- and large-scale instances. The computational results show that the proposed method competes with the best published methods in both solution quality and computational time. Recently, Web services have generated great interest among researchers. This application architecture is based on the client-server model using existing Internet protocols and open standards, and it provides new ways to distribute optimization methods. The proposed GRASP × ELS is packaged into a Web Service (WS), i.e., it offers the research community open access to our optimization approach. Moreover, the web service can be included in future research work with very little programming effort. To encourage use of the web service and to show how easily it can be used, we provide a Java example demonstrating that a client application using the different methods exposed by the service can be built in less than 10 minutes. Such usage resembles classical library inclusion in a program, with the difference that a method is called on the client side and executed on the server. The Web Service paradigm is a new approach to spreading algorithms, and this paper therefore stands at the crossroads of the optimization research community and the web service community. The GRASP × ELS provided in the web service is a state-of-the-art method that competes with previously published ones and has the advantage of being freely available, from any language and anywhere, thereby helping to spread the contributions of operational research.
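The GRASP × ELS scheme (greedy-randomised construction, then evolutionary local search from each start) can be sketched problem-independently. The skeleton and the toy single-machine demo below are illustrative assumptions; they are not the paper's JSSP operators or its disjunctive-graph neighbourhood.

```python
import random

def grasp_els(construct, local_search, mutate, cost,
              grasp_iters=20, els_iters=5, children=3, seed=0):
    """Generic GRASP x ELS loop: each GRASP iteration builds a randomised
    start, then Evolutionary Local Search refines it by spawning mutated
    children, locally optimising each, and keeping the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(grasp_iters):
        s = local_search(construct(rng))
        for _ in range(els_iters):                 # ELS phase
            kids = [local_search(mutate(s, rng)) for _ in range(children)]
            s = min(kids + [s], key=cost)
        if best is None or cost(s) < cost(best):
            best = s
    return best

# Toy demo: single-machine weighted completion time (not a JSSP);
# the optimum is the WSPT order, here [1, 2, 0, 3] with cost 26.
p = [3, 1, 2, 4]                                   # processing times
w = [1, 4, 2, 1]                                   # weights

def cost(seq):
    t = total = 0
    for j in seq:
        t += p[j]
        total += w[j] * t
    return total

def construct(rng):                                # randomised start
    seq = list(range(len(p)))
    rng.shuffle(seq)
    return seq

def local_search(seq):                             # adjacent-swap descent
    seq = seq[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if cost(cand) < cost(seq):
                seq, improved = cand, True
    return seq

def mutate(seq, rng):                              # random transposition
    seq = seq[:]
    i, j = rng.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    return seq
```

For this objective, adjacent-swap descent alone reaches the WSPT optimum; the ELS layer pays off on rugged landscapes such as the JSSP makespan.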
14.
In order to offer context-aware and personalized information, intelligent processing techniques are necessary. Different initiatives considering many contexts have been proposed, but users' preferences need to be learned in order to offer contextualized and personalized services, products or information. Therefore, this paper proposes an agent-based architecture for context-aware and personalized event recommendation based on an ontology and the spreading activation algorithm. The ontology defines the domain knowledge model, while the spreading activation algorithm learns user patterns by discovering user interests. The proposed agent-based architecture was validated through the modeling and implementation of the eAgora application, illustrated in a pervasive university context.
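Spreading activation over an ontology graph can be sketched as a pulse-based propagation; the decay factor, pruning threshold and graph encoding below are generic assumptions rather than the eAgora implementation.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.01, max_iters=10):
    """Pulse-based spreading activation: energy starts at the seed
    concepts (known user interests) and flows along weighted ontology
    links, attenuated by a decay factor; flows below the threshold are
    pruned. Returns the accumulated activation per concept.

    graph: {concept: [(neighbour, link_weight), ...]}
    seeds: {concept: initial_activation}
    """
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(max_iters):
        nxt = {}
        for node, a in frontier.items():
            for nb, w in graph.get(node, []):
                out = a * w * decay
                if out >= threshold:
                    activation[nb] = activation.get(nb, 0.0) + out
                    nxt[nb] = nxt.get(nb, 0.0) + out
        if not nxt:
            break
        frontier = nxt
    return activation
```

Concepts near a user's seed interests accumulate activation, so events attached to high-activation concepts become recommendation candidates.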
15.
We present a student modeling approach designed to be part of an Intelligent Virtual Environment for Training and/or Instruction (IVET). In order to provide proper tutoring to a student, an IVET needs to maintain and dynamically update a student model that takes into account the student's behaviour in the Virtual Environment. For that purpose, the proposed student model employs a student ontology, a pedagogic diagnosis module and a Conflict Solver module. The goal of the pedagogic diagnosis module is to infer which learning objectives have or have not been acquired by the student. Nevertheless, the diagnosis process can be complicated by the fact that, while learning, the student will not only acquire new knowledge but may also forget some previously acquired knowledge, or may have oversights that could mislead the tutor about the true state of the student's knowledge. All of these situations lead to contradictions in the student model that must be solved so that the diagnosis can continue. Thus, our approach consists of applying diagnosis rules until a contradiction arises. At that moment, the Conflict Solver module is responsible for classifying and solving the contradiction. Next, the student ontology is updated according to the resolution adopted by the Conflict Solver, and the diagnosis continues. This paper mainly focuses on the design of the mechanisms the student model needs in order to deal with the non-monotonic nature of pedagogic diagnosis.
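The diagnose-until-contradiction loop can be caricatured in a few lines. The rule and resolver shapes below are hypothetical illustrations; the actual approach works over a student ontology with richer classification of conflicts.

```python
def diagnose(evidence, rules, resolve):
    """Apply diagnosis rules to each piece of observed evidence; when a
    rule's conclusion contradicts the current belief about a learning
    objective, hand the conflict to a resolver before continuing.

    rules: callables mapping an observation to (objective, acquired?) or None.
    resolve: callable (objective, old_belief, new_belief) -> surviving belief.
    """
    model = {}
    for fact in evidence:
        for rule in rules:
            conclusion = rule(fact)
            if conclusion is None:
                continue
            obj, acquired = conclusion
            if obj in model and model[obj] != acquired:
                acquired = resolve(obj, model[obj], acquired)  # contradiction
            model[obj] = acquired
    return model
```

A resolver that lets the newest evidence win models forgetting; other policies (e.g. discounting one-off oversights) slot into the same hook.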
16.
The aim of this study is to identify and prioritize solutions for Knowledge Management (KM) adoption in the Supply Chain (SC) in order to overcome its barriers. It helps organizations concentrate on the highest-ranked solutions and develop strategies to implement them with priority. This paper proposes a framework based on the fuzzy Analytic Hierarchy Process (AHP) and the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to identify and rank the solutions of KM adoption in the SC. Fuzzy AHP is used to determine the weights of the barriers as criteria, and fuzzy TOPSIS is used to obtain the final ranking of the solutions. An empirical case study of an Indian hydraulic valve manufacturing organization is conducted to illustrate the use of the proposed framework. The framework provides a more accurate, effective and systematic decision support tool for the stepwise implementation of KM adoption solutions in the SC, increasing their success rate.
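A textbook-style sketch of the fuzzy TOPSIS step is given below, with triangular fuzzy numbers, benefit criteria only, and criterion weights assumed already available (e.g. from the fuzzy AHP stage); it is not the paper's exact formulation.

```python
import math

def fuzzy_topsis(matrix, weights):
    """Rank alternatives with fuzzy TOPSIS over triangular fuzzy numbers
    (l, m, u); all criteria are treated as benefit criteria. Returns one
    closeness coefficient per alternative (higher is better)."""
    cols = list(zip(*matrix))
    norm = []
    for alt in matrix:
        row = []
        for j, (l, m, u) in enumerate(alt):
            u_max = max(c[2] for c in cols[j])     # normalise by column max
            wl, wm, wu = weights[j]
            row.append((l / u_max * wl, m / u_max * wm, u / u_max * wu))
        norm.append(row)
    fpis, fnis = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)  # fuzzy ideal points

    def dist(a, b):                                # vertex distance
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

    scores = []
    for row in norm:
        d_pos = sum(dist(v, fpis) for v in row)
        d_neg = sum(dist(v, fnis) for v in row)
        scores.append(d_neg / (d_pos + d_neg))     # closeness coefficient
    return scores
```

Sorting the solutions by closeness coefficient gives the priority order used to stage the implementation.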
17.
The processes of logistics service providers are considered highly human-centric, flexible and complex. Deviations from the standard operating procedures described in the designed process models are not uncommon and may result in significant uncertainties. Acquiring insight into the dynamics of the actual logistics processes can effectively assist in mitigating the uncovered risks and creating strategic advantages, i.e., the results of uncertainties with a negative and a positive impact on the organizational objectives, respectively. In this paper a comprehensive methodology for applying process mining in logistics is presented, covering event log extraction and preprocessing as well as the execution of exploratory, performance and conformance analyses. The applicability of the presented methodology and roadmap is demonstrated with a case study at a major Chinese port specializing in bulk cargo.
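The exploratory step of such a methodology typically starts by grouping the extracted event log into trace variants; a minimal sketch is below (real studies would normally use a process-mining toolkit such as pm4py).

```python
from collections import Counter

def trace_variants(event_log):
    """Group an event log into cases (ordered by timestamp) and count
    activity-sequence variants; unexpected variants reveal deviations
    from the designed process model.

    event_log: iterable of (case_id, activity, timestamp) records.
    """
    cases = {}
    for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
        cases.setdefault(case_id, []).append(activity)
    return Counter(tuple(trace) for trace in cases.values())
```

Rare variants in the resulting counter are the natural starting point for the conformance analysis that follows.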
18.
Time plays an important role in Web search, because most Web pages contain temporal information and many Web queries are time-related. How to integrate temporal information into Web search engines has been a research focus in recent years. However, traditional search engines provide little support for processing temporal-textual Web queries. Aiming to solve this problem, in this paper we concentrate on extracting the focused time of Web pages, i.e., the most appropriate time associated with a page, and then use the focused time to improve search effectiveness for time-sensitive queries. In particular, three critical issues are studied in depth. The first is extracting implicit temporal expressions from Web pages; the second is determining the focused time among all the extracted temporal information; the last is integrating focused time into a search engine. For the first issue, we propose a new dynamic approach to resolving implicit temporal expressions in Web pages. For the second, we present a score model to determine the focused time of Web pages; the model takes into account both the frequency of temporal information in a page and the containment relationships among temporal expressions. For the third, we combine textual similarity and temporal similarity between queries and documents in the ranking process. To evaluate the effectiveness and efficiency of the proposed approaches, we build a prototype system called the Time-Aware Search Engine (TASE). TASE extracts both explicit and implicit temporal expressions from Web pages, calculates a relevance score between each page and each temporal expression, and re-ranks search results based on the temporal-textual relevance between pages and queries. Finally, we conduct experiments on real data sets. The results show that our approach achieves high accuracy in resolving implicit temporal expressions and extracting focused time, and better ranking effectiveness for time-sensitive Web queries than competing algorithms.
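A frequency-plus-containment score of the kind described can be sketched as follows. The 0.5 containment bonus, the string-prefix encoding of time containment, and the linear mixing weight are all illustrative assumptions, not the paper's model.

```python
from collections import Counter

def focused_time(times):
    """Pick a page's focused time from its extracted temporal
    expressions: each candidate scores its own frequency plus a bonus
    from expressions it contains (a year contains its months), encoded
    here simply as string prefixes ('2010' contains '2010-05').

    times: list of 'YYYY' or 'YYYY-MM' strings, one entry per occurrence.
    """
    freq = Counter(times)

    def score(t):
        contained = sum(c for other, c in freq.items()
                        if other != t and other.startswith(t))
        return freq[t] + 0.5 * contained           # 0.5 = containment bonus
    return max(freq, key=score)

def rank_score(text_sim, time_sim, alpha=0.6):
    """Temporal-textual ranking: linear mix of the textual and temporal
    similarity between a query and a document (alpha is a tuning knob)."""
    return alpha * text_sim + (1 - alpha) * time_sim
```

So a year mentioned repeatedly and reinforced by its own months outscores an isolated date, and the chosen focused time then feeds the temporal half of `rank_score`.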
19.
A concept lattice is an ordered structure of concepts and is particularly effective for mining association rules. However, a concept lattice is inefficient for large databases because the lattice size grows with the number of transactions. Finding an efficient strategy for dynamically updating the lattice is an important issue for real-world applications, where new transactions are constantly inserted into databases. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent closed itemset lattice from the original database. The lattice is then updated as new transactions are inserted, and the number of rescans over the entire database is reduced in the maintenance process. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
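The incremental idea can be sketched compactly: after inserting a transaction t, the closed itemsets of the grown database are the old ones plus every intersection of t with an old closed itemset. This is a textbook incremental-closure update (supports only, no lattice links), not the paper's algorithm.

```python
def update_closed(closed, transaction):
    """Insert one transaction into the family of closed itemsets without
    rescanning the database. Uses two facts: every intersection c & t of
    an old closed set with the new transaction is closed afterwards, and
    the old support of any itemset equals the support of its smallest
    old closed superset.

    closed: {frozenset: support} for the database seen so far.
    """
    t = frozenset(transaction)

    def old_support(x):                      # support of x's old closure
        return max((s for c, s in closed.items() if x <= c), default=0)

    updated = {c: s + (1 if c <= t else 0) for c, s in closed.items()}
    for c in list(closed) + [t]:
        inter = c & t                        # closed in the updated DB
        updated[inter] = old_support(inter) + 1
    return updated
```

Starting from an empty family and feeding transactions one by one also builds the initial set of closed itemsets, though batch construction is usually faster for the first load.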
20.
The weak signal concept, according to Ansoff, aims to advance strategic early warning. It enables predicting, in advance, the appearance of events that are relevant for an organization; an example is predicting the appearance of a new and relevant technology for a research organization. Existing approaches detect weak signals based on an environmental-scanning procedure that considers textual information from the internet, since about 80% of all data on the internet is textual. The texts are processed by a specific clustering approach in which clusters that represent weak signals are identified. In contrast to these related approaches, we propose a new methodology that investigates a sequence of clusters measured at successive points in time. This makes it possible to trace the development of weak signals over time and thus to identify weak-signal developments relevant to an organization's decision making in a strategic early-warning environment.
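A toy filter in this spirit flags terms that are still rare in the latest scanning period but growing fast across successive periods; the thresholds and the term-count representation are illustrative assumptions, far simpler than the cluster-sequence analysis proposed above.

```python
def weak_signal_candidates(counts_over_time, max_freq=5, min_growth=2.0):
    """Flag terms that behave like weak signals across successive
    scanning periods: still rare in the latest period (low visibility)
    but with strongly growing frequency.

    counts_over_time: {term: [count in period 1, period 2, ...]}.
    """
    signals = []
    for term, series in counts_over_time.items():
        first, last = series[0], series[-1]
        if 0 < last <= max_freq and first > 0 and last / first >= min_growth:
            signals.append(term)
    return signals
```

Established topics fail the rarity test and fading ones fail the growth test, leaving only the rare-but-accelerating candidates for an analyst to inspect.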
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号